Last Update: 2025/3/26
Qwen Embedding API
The Qwen Embedding API generates vector representations (embeddings) of text. Because it follows the OpenAI request format, it can also be called through OpenAI's SDK. This document covers the API endpoint, request parameters, and response structure.
Endpoint
POST https://platform.llmprovider.ai/v1/embeddings
Request Headers
| Header | Value |
| --- | --- |
| Authorization | `Bearer YOUR_API_KEY` |
| Content-Type | `application/json` |
Request Body
The request body should be a JSON object with the following parameters:
| Parameter | Type | Description |
| --- | --- | --- |
| model | string | The model to use (e.g., `text-embedding-v3`). |
| input | string | The input text to embed. |
| encoding_format | string | (Optional) The format to return the embeddings in. Can be either `float` or `base64`. |
| dimensions | integer | (Optional) The number of dimensions the resulting output embeddings should have. Only supported in `text-embedding-v3` and later models. |
| user | string | (Optional) A unique identifier representing the end user. |
Example Request
{
  "model": "text-embedding-v3",
  "input": "The quick brown fox jumps over the lazy dog.",
  "user": "user-1234"
}
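The optional parameters described in the table above can be added to the same payload. The sketch below builds a request body that also sets `encoding_format` and `dimensions`; the dimension value is an arbitrary illustration, and whether a given model honors `dimensions` follows the note in the parameter table.

```python
import json

# Sketch of a request body that also uses the optional parameters.
# 1024 is an illustrative value, not a documented default.
payload = {
    "model": "text-embedding-v3",
    "input": "The quick brown fox jumps over the lazy dog.",
    "encoding_format": "float",  # or "base64"
    "dimensions": 1024,          # only honored by models that support it
    "user": "user-1234",
}

print(json.dumps(payload, indent=2))
```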
Response Body
The response body will be a JSON object containing the generated embeddings and other metadata.
| Field | Type | Description |
| --- | --- | --- |
| object | string | The type of object returned, usually `embedding`. |
| data | array | A list of embedding objects. |
| model | string | The model used for the embedding. |
| usage | object | Token usage statistics for the request. |
Embedding Object
| Field | Type | Description |
| --- | --- | --- |
| index | integer | The index of the embedding in the list of embeddings. |
| embedding | array | The embedding vector. |
| object | string | The object type, which is always `embedding`. |
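Because `embedding` is a plain list of floats, it can be used directly in downstream similarity computations. The sketch below compares two such vectors with cosine similarity; the vectors are placeholders rather than real API output.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Placeholder vectors standing in for two `embedding` fields from the API.
vec_a = [0.0023, -0.0093, 0.0147]
vec_b = [0.0031, -0.0101, 0.0139]
print(cosine_similarity(vec_a, vec_b))
```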
Example Response
{
  "object": "embedding",
  "data": [
    {
      "object": "embedding",
      "embedding": [
        0.0023064255,
        -0.009327292,
        ...
      ],
      "index": 0
    }
  ],
  "model": "text-embedding-v3",
  "usage": {
    "prompt_tokens": 9,
    "total_tokens": 9
  }
}
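A typical consumer reads the vector from `data[0]["embedding"]` and the token counts from `usage`. The sketch below does that for the example response above; the base64 branch assumes base64-encoded little-endian float32 values, a common convention that this document does not confirm.

```python
import base64
import struct

def extract_embedding(response_json: dict, encoding_format: str = "float") -> list[float]:
    """Pull the first embedding vector out of an embeddings response."""
    raw = response_json["data"][0]["embedding"]
    if encoding_format == "base64":
        # Assumption: base64-encoded little-endian float32 values,
        # as used by OpenAI-style embedding APIs. Not confirmed by this document.
        buf = base64.b64decode(raw)
        return list(struct.unpack(f"<{len(buf) // 4}f", buf))
    return raw  # already a list of floats

# Usage with a truncated copy of the example response above.
example = {
    "object": "embedding",
    "data": [{"object": "embedding", "embedding": [0.0023064255, -0.009327292], "index": 0}],
    "model": "text-embedding-v3",
    "usage": {"prompt_tokens": 9, "total_tokens": 9},
}
vector = extract_embedding(example)
print(len(vector), example["usage"]["total_tokens"])
```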
Example Requests
- Shell
- Node.js
- Python
curl -X POST https://platform.llmprovider.ai/v1/embeddings \
  -H "Authorization: Bearer $YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "input": "The food was delicious and the waiter...",
    "model": "text-embedding-v3",
    "encoding_format": "float"
  }'
const axios = require('axios');

const apiKey = 'YOUR_API_KEY';
const url = 'https://platform.llmprovider.ai/v1/embeddings';

const data = {
  model: 'text-embedding-v3',
  input: 'The food was delicious and the waiter...',
  encoding_format: 'float'
};

const headers = {
  'Authorization': `Bearer ${apiKey}`,
  'Content-Type': 'application/json'
};

axios.post(url, data, { headers })
  .then(response => {
    console.log('Response:', response.data);
  })
  .catch(error => {
    console.error('Error:', error);
  });
import requests
import json

api_key = 'YOUR_API_KEY'
url = 'https://platform.llmprovider.ai/v1/embeddings'

headers = {
    'Authorization': f'Bearer {api_key}',
    'Content-Type': 'application/json'
}

data = {
    'model': 'text-embedding-v3',
    'input': 'The food was delicious and the waiter...',
    'encoding_format': 'float'
}

response = requests.post(url, headers=headers, data=json.dumps(data))

if response.status_code == 200:
    print('Response:', response.json())
else:
    print('Error:', response.status_code, response.text)
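Because the endpoint follows the OpenAI request format, the official OpenAI Python SDK should also work once its `base_url` points at the provider. This is a sketch under that assumption; the SDK is not otherwise covered by this page.

```python
from openai import OpenAI  # pip install openai

# Assumption: the endpoint is OpenAI-compatible, so only base_url needs overriding.
client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://platform.llmprovider.ai/v1",
)

result = client.embeddings.create(
    model="text-embedding-v3",
    input="The food was delicious and the waiter...",
    encoding_format="float",
)

print(result.data[0].embedding[:5], result.usage.total_tokens)
```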
For any questions or further assistance, please contact us at [email protected].